
    Design, control and implementation of CoCoA: a human-friendly autonomous service robot

    The growing demand to automate everyday tasks, combined with the rapid development of software technologies that can furnish service robots with a large repertoire of skills, is driving the need for the design and implementation of human-friendly service robots, i.e., safe and dependable machines operating in the close vicinity of humans or directly interacting with them in social domains. The technological shift from classical industrial robots utilized on structured factory floors to service robots that are used in close collaboration with humans introduces many demanding challenges to ensure safe and autonomous operation of such robots. In this thesis, we present the mechanical design, modeling, and software integration for motion/navigation planning and human-collaborative control of a human-friendly service robot, CoCoA: Cognitive Collaborative Assistant. CoCoA is designed to be bimanual with dual 7 degrees-of-freedom (DoF) anthropomorphic arms featuring spherical wrists. Each arm weighs less than 1.6 kg and possesses a payload capacity of 1 kg. Bowden-cable based transmissions are used for the arms to enable grounding of the motors, and this arrangement results in lightweight arms with passive back-driveability. Thanks to the passive back-driveability and low inertia of its arms, the operation of CoCoA is guaranteed to be safe not only during physical interactions, but also under collisions with the robot arms. The holonomic base of CoCoA possesses four driven and steered wheel modules and is compatible with wheelchair-accessible environments. CoCoA also features a single-DoF torso and dual one-DoF grippers, resulting in a service robot with a total of 25 active DoF. The dynamic/kinematic/geometric models of CoCoA are derived in open source software. Inverse kinematics, stable grasp, kinematic reachability, and inverse reachability databases are generated for the robot to enable computation of kinematically feasible, collision-free motion/grasp plans for its arms/grippers and navigation plans for its holonomic base, at interactive rates. For the real-time control of the robot, motion/navigation plans characterizing feasible joint trajectories are passed to feedback controllers dedicated to each joint. The joint space control of each joint is implemented in hardware, while communication/synchronization among different DoF is ensured through EtherCAT/RS-485 fieldbuses running at high sampling rates. To comply with human movements under physical interactions and to enable human-collaborative contour tracking tasks, CoCoA also implements passive velocity field control that guarantees user safety by ensuring passivity of interaction with respect to externally applied forces. The feasibility of the design and the applicability of the overall planning and control framework are demonstrated through dynamic simulations and physical implementations of several service robotics scenarios.
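    The 25 active DoF reported above can be tallied directly from the components named in the abstract. The short sketch below does that arithmetic; the two-DoF-per-wheel-module split (one drive, one steer actuator) is our reading of "four driven and steered wheel modules" rather than a figure stated explicitly in the text.

    # Tally of CoCoA's active degrees of freedom as listed in the abstract.
    # The 2-DoF-per-wheel-module split (drive + steer) is an assumption.
    dof = {
        "arms": 2 * 7,       # dual 7-DoF anthropomorphic arms with spherical wrists
        "torso": 1,          # single-DoF torso
        "grippers": 2 * 1,   # dual one-DoF grippers
        "base": 4 * 2,       # four wheel modules, each driven and steered
    }
    print(dof, "total =", sum(dof.values()))  # total = 25 active DoF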

    Explainable robotic plan execution monitoring under partial observability

    Successful plan generation for autonomous systems is necessary but not sufficient to guarantee reaching a goal state by an execution of a plan. Various discrepancies between an expected state and the observed state may occur during the plan execution (e.g., due to failure of robot parts), and these discrepancies may lead to plan failures. For that reason, autonomous systems should be equipped with execution monitoring algorithms so that they can autonomously recover from such discrepancies. We introduce a plan execution monitoring algorithm that operates under partial observability. This algorithm relies on novel formal methods for hybrid prediction, diagnosis and explanation generation, and planning. The prediction module generates an expected state after the execution of a part of the plan from an incomplete state, to check for discrepancies. The diagnostic reasoning module generates meaningful hypotheses to explain failures of robot parts. Unlike existing diagnosis methods, previously generated hypotheses can be revised based on new partial observations, increasing the accuracy of explanations as further information becomes available. The replanning module considers these explanations while computing a new plan that would avoid such failures. All these reasoning modules are hybrid in that they combine high-level logical reasoning with low-level feasibility checks based on probabilistic methods. We experimentally show that these hybrid reasoning modules improve the performance of plan execution monitoring in service robotics applications with multiple bimanual mobile robots. To evaluate the performance and to understand the applicability of the proposed execution monitoring algorithm, we introduce an execution simulation algorithm. This algorithm is based on a formal method that allows generation of dynamic and relevant discrepancies, and simulation of all possible plan execution scenarios considering potential failures. This simulation algorithm can be used not only to test execution monitoring algorithms under different conditions, but also to evaluate the robustness of plans. We illustrate these applications of our simulation algorithm in service robotics and cognitive factory settings with multiple mobile robots.
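    The prediction / discrepancy check / diagnosis / replanning cycle described above can be summarized as a short control loop. The sketch below is a minimal Python illustration, assuming hypothetical callables (predict, observe, diagnose, replan) and a dictionary-of-fluents state representation; it is not the thesis' actual interface.

    from typing import Callable, List, Optional, Set

    def monitor_execution(
        plan: List[str],
        execute: Callable[[str], None],
        predict: Callable[[dict, str], dict],
        observe: Callable[[], dict],
        diagnose: Callable[[dict, dict, Set[str]], Set[str]],
        replan: Callable[[dict, Set[str]], Optional[List[str]]],
        state: dict,
    ) -> bool:
        """Execute `plan`, monitoring for discrepancies after every action."""
        hypotheses: Set[str] = set()           # current explanations for part failures
        while plan:
            action, rest = plan[0], plan[1:]
            expected = predict(state, action)  # expected partial state after the action
            execute(action)
            observed = observe()               # partial observation of the world
            # A discrepancy is an observed fluent that contradicts the prediction.
            discrepancy = any(
                observed[k] != v for k, v in expected.items() if k in observed
            )
            if discrepancy:
                hypotheses = diagnose(expected, observed, hypotheses)  # revise, not restart
                new_plan = replan(observed, hypotheses)                # avoid suspected failures
                if new_plan is None:
                    return False               # no recovery plan found
                plan = new_plan
            else:
                state = {**state, **expected, **observed}
                plan = rest
        return True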

    A formal approach to discrepancy generation for systematic testing of execution monitoring algorithms in simulation

    Successful plan generation for autonomous systems is necessary but not sufficient to guarantee reaching a goal state by an execution of the plan, since various discrepancies between the expected state and the observed state may occur during the plan execution (e.g., due to unexpected exogenous events, changes in the goals, or failure of robot parts), and these discrepancies may lead to plan failures. For that reason, these systems should be equipped with execution monitoring algorithms so that they can autonomously recover from such discrepancies. Before execution monitoring algorithms are deployed on autonomous systems, comprehensive testing and simulation is needed to evaluate their performance and to understand their applicability. With this motivation, we introduce formal methods for discrepancy generation with respect to the plan being executed, utilizing feasibility checks of robotic actions, and we propose a novel generic algorithm for simulating execution monitoring algorithms that enables their systematic testing in simulation. We illustrate an application of our methods on an execution monitoring algorithm that involves guided replanning and diagnosis, in the context of service robotics scenarios that utilize multiple bimanual mobile manipulators.
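    As a rough companion to the abstract above, the sketch below shows one way a plan-level simulator could inject discrepancies subject to a feasibility check and hand control to the execution monitor under test. All names (apply_action, is_feasible, inject, monitor) and the random injection policy are illustrative assumptions, not the paper's formal construction.

    import random
    from typing import Callable, List, Optional

    def simulate_with_discrepancies(
        plan: List[str],
        apply_action: Callable[[dict, str], dict],   # nominal transition model
        is_feasible: Callable[[dict, str], bool],    # low-level feasibility check
        inject: Callable[[dict], dict],              # perturbs the state (the discrepancy)
        monitor: Callable[[dict, List[str]], Optional[List[str]]],  # monitor under test
        state: dict,
        p_discrepancy: float = 0.2,
        seed: int = 0,
    ) -> bool:
        """Run `plan`, randomly injecting discrepancies; True if the monitor recovers."""
        rng = random.Random(seed)
        while plan:
            action, rest = plan[0], plan[1:]
            if not is_feasible(state, action):
                # An infeasible action is itself a discrepancy the monitor must handle.
                recovered = monitor(state, plan)
                if recovered is None:
                    return False
                plan = recovered
                continue
            state = apply_action(state, action)
            plan = rest
            if rng.random() < p_discrepancy:
                state = inject(state)                # generate a discrepancy w.r.t. the plan
                recovered = monitor(state, plan)     # monitor proposes a recovery plan
                if recovered is None:
                    return False
                plan = recovered
        return True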

    Explainable robotic plan execution monitoring under partial observability

    Successful plan generation for autonomous systems is necessary but not sufficient to guarantee reaching a goal state by an execution of a plan. Various discrepancies between an expected state and the observed state may occur during the plan execution (e.g., due to unexpected exogenous events, changes in the goals, or failure of robot parts), and these discrepancies may lead to plan failures. For that reason, autonomous systems should be equipped with execution monitoring algorithms so that they can autonomously recover from such discrepancies. We introduce a plan execution monitoring algorithm that operates under partial observability. This algorithm relies on novel formal methods for hybrid prediction, diagnosis and explanation generation, and planning. The prediction module generates an expected state after the execution of a part of the plan from an incomplete state to check for discrepancies. The diagnostic reasoning module generates meaningful hypotheses to explain failures of robot parts. Unlike existing diagnosis methods, previously generated hypotheses can be revised based on new partial observations, increasing the accuracy of explanations as further information becomes available. The replanning module considers these explanations while computing a new plan that would avoid such failures. All these reasoning modules are hybrid in that they combine high-level logical reasoning with low-level feasibility checks based on probabilistic methods. We experimentally show that these hybrid formal reasoning modules improve the performance of plan execution monitoring.
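    The hypothesis-revision idea emphasized above (refining earlier explanations as new partial observations arrive, rather than diagnosing from scratch) can be illustrated with a deliberately simple consistency filter. The fault model, the bounded fault cardinality, and all identifiers below are assumptions made for illustration, not the paper's diagnostic formalism.

    from itertools import combinations
    from typing import Dict, FrozenSet, Set

    Part = str
    Hypothesis = FrozenSet[Part]

    def initial_hypotheses(parts: Set[Part], max_faults: int = 2) -> Set[Hypothesis]:
        """All candidate fault sets up to a bounded number of simultaneous failures."""
        hyps: Set[Hypothesis] = set()
        for k in range(max_faults + 1):
            hyps.update(frozenset(c) for c in combinations(sorted(parts), k))
        return hyps

    def revise(hypotheses: Set[Hypothesis], observation: Dict[Part, bool]) -> Set[Hypothesis]:
        """Keep only hypotheses consistent with a new partial observation.

        `observation` maps a part to True (seen working) or False (seen faulty);
        unobserved parts are simply absent, which is what makes it partial.
        """
        return {
            hyp for hyp in hypotheses
            if all((part in hyp) != works for part, works in observation.items())
        }

    # Example: the gripper is later observed faulty and the left arm working, so
    # every hypothesis lacking the gripper or blaming the left arm is discarded.
    hyps = initial_hypotheses({"left_arm", "right_arm", "gripper"})
    hyps = revise(hyps, {"gripper": False, "left_arm": True})
    print(sorted(sorted(h) for h in hyps))  # [['gripper'], ['gripper', 'right_arm']]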